8256155: Allow multiple large page sizes to be used on Linux #1153
Conversation
Hi @mgkwill, welcome to this OpenJDK project and thanks for contributing! We do not recognize you as a Contributor and need to ensure you have signed the Oracle Contributor Agreement (OCA). If you have not signed the OCA, please follow the instructions, filling in your GitHub username in the "Username" field of the application. Once you have signed the OCA, please let us know by writing /signed in a comment. If you are already an OpenJDK Author, Committer or Reviewer, please click here to open a new issue so that we can record that fact. Please use "Add GitHub user mgkwill" as the summary for the issue. If you are contributing this work on behalf of your employer and your employer has signed the OCA, please let us know by writing:
/covered
Thank you! Please allow a few business days to verify that your employer has signed the OCA. Also, please note that pull requests that are pending an OCA check will not usually be evaluated, so your patience is appreciated!
/label hotspot-gc |
@mgkwill |
Hi and welcome :) I haven't started reviewing the code in detail but a first quick glance raised a couple of questions/comments:
Hi,
this seems like an improvement for a very specific scenario (2M instead of 1G pages on x86(?) Linux(?)). At the moment this feels more like an early prototype. The lack of comments/documentation is not helping.
Both JBS and PR are a bit taciturn. It would help if you could elaborate a bit. E.g. is this just for Linux? for x86 only? since the ticket talks about 4K pages, which are not universal across all architectures.
I can glean some of what you want to do from the patch itself, but the spec is vague so there is no way to verify if the patch matches the spec.
What does page size have to do with exec permission? This should not be tied to exec. The whole patch should not contain the word "exec" :)
Why is this proposal hard coded to 2M pages?
What memory regions are supposed to be affected by this? JBS ticket talks about "code, card table and other".
One problem I see is that the notion of "we have a small standard page and a single large pinned page size" is - I believe - baked into a few places. Are there any places where an implicit assumption of the page size or their "pinned-ness" could break things now (see also below remark about UseSHM)? For instance, are these pages pinned on all our platforms, and if not, could code be affected which commits/uncommits and assumes a certain page size?
What tests have you run? On what platforms? Also platforms with different page sizes? How well did you test UseSHM?
The latter is interesting because arguably there is the bigger behavioral change. TLBFS path was using a mixture of large and small pages anyway, so adding another page size into the mix is not a big stretch. But for SHM, things would change: where before reserve_memory_special would return NULL and we'd invoke fallback reservation, now we return a region consisting of 2M pinned pages.
For SHM, I think you need to make sure that alignment matches SHMLBA?
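As a hedged illustration of the SHMLBA concern (the helper name is hypothetical, not HotSpot code): System V shmat() requires attach addresses to be SHMLBA-aligned, so any requested alignment should be raised to at least SHMLBA.

```cpp
#include <cassert>
#include <cstddef>
#include <sys/shm.h>  // defines SHMLBA on POSIX systems

// Hypothetical helper: clamp a requested alignment up to SHMLBA, since
// shmat() attach addresses must be SHMLBA-aligned. Both values are assumed
// to be powers of two, as page-size alignments are.
static size_t shm_alignment(size_t requested_alignment) {
  size_t lba = (size_t)SHMLBA;
  return requested_alignment < lba ? lba : requested_alignment;
}
```

On Linux/x86_64, SHMLBA equals the base page size, so a 2M large-page alignment already satisfies it; on some other architectures SHMLBA is larger than the base page, which is exactly the case the check guards against.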
It was not clear from the patch or the JBS item whether you propose to change the semantics of LargePageSizeInBytes. E.g. what happens if the value specified explicitly is smaller than your exec_page size? Your patch seems to give preference to exec_page_size. But this would be a behavioral change, and may need a CSR.
Finally, comments would be nice. Clear API specs. Extending regression tests for reserve_memory_special would be good too, to test the new behavior (for gtests examples, see test/hotspot/gtest/runtime in the source folder).
The linux-2m-page-specific code in the platform-generic G1 test seems wrong.
Cheers, Thomas
Hi Thomas, Thanks so much for your review. Please bear with me as this is my first patch to the JDK community. But I have pushed patches to other open source communities (OpenDaylight, ONAP, OpenStack) and worked as a committer in some. Responses below inline:
I appreciate the feedback. Perhaps the lack of detail in the pull request/JDK issue is a function of my narrow focus on the specific purpose and my lack of understanding of how much detail is normally included. The purpose of the patch/issue is to enable code hugepage memory reservations on Linux when the JDK is configured with 1G hugepages (LargePages in JDK parlance). To my knowledge, in most cases code memory is currently reserved in the default page size of the system when using 1G LargePages, because it does not require reservations of 1G or larger. In modern Linux variants the default page size seems to be 4k on x86_64; on other architectures it could be up to 64k. The purpose of the patch is to enable the use of smaller LargePages for reservations less than 1G when LargePages are enabled and 1G is set as LargePageSizeInBytes, so as not to fall back to 4k-64k pages for these reservations.
I'd appreciate any advice on writing a less vague spec. I have used 'exec' as a stand-in for code memory reservations in my descriptions, mostly because a 'bool exec' parameter is used in the functions that reserve HugePages and is later translated into PROT_EXEC when mmap is called ('exec' is passed in but not used in the SHM path). These are the particular memory reservations we wanted the patch to affect when using 1G LargePages. However, I will remove those references if unwarranted.
To my knowledge 2M is the smallest large page size supported by Linux at the moment. Hardcoding 2M pages was an attempt to simplify the reservation of code memory using LargePages; it was also an implementation suggestion from some Oracle engineers on the topic, though my implementation of those suggestions could be suspect. Perhaps we should detect the smallest large page size in the _page_sizes array that will fit the requested memory amount. However, populating the _page_sizes array is complicated by the fact that the current JDK code relies heavily on the default large page size, almost as if that were the only size that can be used. 2M is the default large page size on Linux and should be available even if one configures the default large page size to 1G.
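A minimal sketch of that idea - picking a page size from the available large page sizes based on the requested reservation - might look like this (the helper name and the use of std::vector are illustrative; HotSpot's _page_sizes is its own structure):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative only: choose the largest available large page size that
// still fits within the requested reservation, so a request smaller than
// 1G can use 2M pages instead of falling back to 4k-64k base pages.
// Returns 0 if no large page size fits (caller falls back to base pages).
static size_t select_large_page_size(const std::vector<size_t>& large_page_sizes,
                                     size_t bytes) {
  size_t chosen = 0;
  for (size_t ps : large_page_sizes) {
    if (ps <= bytes && ps > chosen) {
      chosen = ps;
    }
  }
  return chosen;
}
```

With {2M, 1G} available, a 48M code-heap reservation would select 2M pages, a 2G heap would select 1G pages, and a 1M request would return 0.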
Code is the target memory region. However, there are some other instances where large-page reservation happens due to the addition of 2M pages as an option. Some calls fail and error out when adding 2M pages to the _page_sizes array alongside 1G pages. See https://github.com/openjdk/jdk/pull/1153/files/daba99ac5f46dadb263caafa7ff87566d6d7dc58#diff-aeec57d804d56002f26a85359fc4ac8b48cfc249d57c656a30a63fc6bf3457adR3790 However, at https://github.com/openjdk/jdk/pull/1153/files/daba99ac5f46dadb263caafa7ff87566d6d7dc58#diff-aeec57d804d56002f26a85359fc4ac8b48cfc249d57c656a30a63fc6bf3457adR4077 So in
My architecture knowledge outside of x86_64 is limited. I've been looking into this and thinking about it and I will have some more comments in the next day or so. For UseSHM I ran some negative tests but I will do some more rigorous testing and report back.
Looking into this.
I'm open to removing exec references and just enabling multiple page sizes which would allow for 2M pages to be used by code memory reservations.
Thanks. I will push an updated patch w/ comments. I will attempt a clear API spec and regression tests for reserve_memory_special but will need some guidance on those.
Any advice here? My change specifically changes the behavior of the pages returned in the test for Linux platforms, but should not affect other platforms. I don't know how this would generally be handled for JDK tests in this case. It seems to me that the JDK will act differently on different platforms. How is this normally handled?
Thanks again for the review.
Hi Stefan, Thanks so much for your review.
To my knowledge 2M is the smallest large page size supported by Linux at the moment. Hardcoding 2M pages was an attempt to simplify the reservation of code memory using LargePages. In most cases code memory is currently reserved in the default page size of the system when using 1G LargePages, because it does not require reservations of 1G or larger. In modern Linux variants the default page size seems to be 4k on x86_64; on other architectures it could be up to 64k. The purpose of the patch is to enable the use of smaller LargePages for reservations less than 1G when LargePages are enabled and 1G is set as LargePageSizeInBytes, so as not to fall back to 4k-64k pages for these reservations. Perhaps I should just select the page size <= bytes requested and remove the 'exec' special case.
os::default_large_page_size() will not necessarily be small enough for code memory reservations: if os::default_large_page_size() = 1G, we would get 4k pages on most Linux x86_64 variants. My attempt is to ensure the smallest large page size available is used for code memory reservations. Perhaps my 2M hardcoding was a mistake and I should discover this size and select it based on the bytes being reserved.
Pushed a new patch removing exec references and instead using a large page size based on the requested memory size in bytes. Moved added definitions from os to os::Linux. More work/research in progress. Added 2M LargePages to _page_sizes.
Use 2m pages for large page requests less than 1g on linux when 1G are default pages
- Add os::Linux::large_page_size_2m() that returns 2m as size
- Add os::Linux::select_large_page_size() to return correct large page size for size_t bytes
- Add 2m size to _page_sizes array
- Update reserve_memory_special methods to set/use large_page_size based on bytes reserved
- Update large page not reserved warnings to include large_page_size attempted
- Update TestLargePageUseForAuxMemory.java to expect 2m large pages in some instances
Signed-off-by: Marcus G K Williams <marcus.williams@intel.com>
Yes, I see no reason to keep that special case and we want to keep this code as general as possible. Looking at the code in
You are correct that the default size might indeed be 1G, so using something like I suggest above to figure out the available page sizes and then using an appropriate one given the size of the mapping sounds good. Please also avoid force-pushing changes to open PRs, since it makes it harder to follow what changes between updates. It is fine for a PR to contain multiple commits, and if you need to update with things from the main branch you should merge rather than rebase. Cheers,
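One way to discover the available sizes on Linux - sketched here under the assumption that the kernel exposes them in sysfs, as mainline kernels do; this is not the code the PR ended up with:

```cpp
#include <dirent.h>
#include <cassert>
#include <cstdio>
#include <cstddef>
#include <vector>

// Illustrative sketch: the kernel lists one directory per supported large
// page size under /sys/kernel/mm/hugepages, named "hugepages-<n>kB"
// (e.g. "hugepages-2048kB" for 2M, "hugepages-1048576kB" for 1G).
static std::vector<size_t> discover_large_page_sizes() {
  std::vector<size_t> sizes;
  DIR* dir = opendir("/sys/kernel/mm/hugepages");
  if (dir == NULL) {
    return sizes;  // hugepages not exposed; return an empty list
  }
  struct dirent* entry;
  while ((entry = readdir(dir)) != NULL) {
    size_t kb = 0;
    if (sscanf(entry->d_name, "hugepages-%zukB", &kb) == 1) {
      sizes.push_back(kb * 1024);  // convert kB to bytes
    }
  }
  closedir(dir);
  return sizes;
}
```

The result could then feed a _page_sizes-style array, with the mapping size deciding which entry to use.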
Hi Marcus, thanks, and a belated welcome! Some initial background: we at SAP are maintainers for a number of ports, among others AIX and Linux ppc/s390, as well as some proprietary ones (e.g. HPUX or ia64). So I wear my platform glasses when looking at this code. IMHO the virtual memory layer in hotspot - os::reserve_memory() and all its friends - could do with a revamp, or at least a consistent API documentation :-/. Supposed to be a platform-independent abstraction, its facade breaks in many places. See e.g. JDK-8255978, JDK-8253649 (all Windows), AIX SysV shmem handling, @AntonKozlov's valiant attempt to add MAP_JIT code heap reservation on MacOS (#294), or the relative difficulty with which support for JEP 316 (from Intel) had been added. Hence my initial caution: every new feature increases complexity for us maintainers, especially if it continues the bad tradition of not documenting or commenting anything. Since I do not know whether Intel sticks around to maintain this contribution (bit of a mixed track record there, see e.g. JDK-8256181), we must plan on maintenance falling to us. That said, now that I understand better what you want to do, your plan certainly makes sense and is useful. One of the more pressing concerns I have is whether the changes to reserve_memory() would somehow be observable from the outside and/or leak back into the os layer when calling os::commit_memory/uncommit_memory/release_memory. This is the case with @AntonKozlov's MAP_JIT change: it requires a matching commit call in os::commit_memory() for executable memory allocated with os::reserve_memory(), and therefore exposed one weakness of the os::reserve_memory() API: that it is very difficult to pass along meta information about memory mappings. I think this is not the case here, but I'm not sure, and we should be sure. More remarks inline.
Please beef up the JBS issue a bit. If you do not have access to it, you can send the text to me I will update it. Or even easier, just update the PR description and we copy the text to the JBS. JBS tickets are supposed to keep information about what we did and why for a long time. When formulating the text, just imagine the reader to be someone in the future with general knowledge in your field but without particular knowledge about this very case. I know this is a vague description though; for an example, see e.g. https://bugs.openjdk.java.net/browse/JDK-8255978.
Right, and as Stefan suggested, this should be kept more "fluid" and not be hard coded to 2M, nor to just one additional large page. Maybe the system has four page sizes (our proprietary HPUX has that, not that it matters here).
We need to decide whether we want to do this for the code heap only or for every reservation done with reserve_memory_special (I really dislike that name, btw). In your proposal you "piggyback" on the exec property as a stand-in for "code heap", which is not clean and also not necessarily true. So: (a) if we only want to do this for the code heap, we could think about creating a separate API for allocating the code heap, e.g. os::reserve_code_space() and os::release_code_space(). This is one of the ideas @AntonKozlov came up with to circumvent the need for a fully fledged revamp of these APIs while still being able to move his PR forward. (b) If we want to do this for all callers of reserve_memory_special(), we should also remove any mention of "exec" and just implement that. I currently favour (b) but would like to know the opinions of others.
Okay. We do not expect every contributor to have exotic test machines, but this means we will have to do that testing. We need to know to plan in these efforts.
When I write API specs I basically mean "new code should comment better". That can be as simple as a one-liner above your os::Linux::select_large_page_size() function. About regression tests, we have a google-test suite (see test/hotspot/gtest) which would be the appropriate place to put in tests.
I defer to the G1 folks for that.
Sure. Thanks for the much clearer information. Cheers, Thomas
Use 2m pages for large page requests less than 1g on linux when 1G are default pages
- Add os::Linux::large_page_size_2m() that returns 2m as size
- Add os::Linux::select_large_page_size() to return correct large page size for size_t bytes
- Add 2m size to _page_sizes array
- Update reserve_memory_special methods to set/use large_page_size based on bytes reserved
- Update large page not reserved warnings to include large_page_size attempted
- Update TestLargePageUseForAuxMemory.java to expect 2m large pages in some instances
Signed-off-by: Marcus G K Williams <marcus.williams@intel.com>
Hi Marcus, the more I think about this the more I think your proposal makes sense. In my opinion I would do it transparently for reserve_memory_special() (so, not tied to the code heap). Maybe one way to simplify this and move it forward would be to just do it for UseHugeTLBFS, and leave the UseSHM path unchanged. I consider this less risky, since with UseHugeTLBFS we already reserve spaces with mixed page sizes and that seems to work - so here, callers are already wrong if they make any assumptions about the underlying page size. Note that UseHugeTLBFS is the default if +UseLargePages is set. Just my 5 cents. Cheers, Thomas
I agree with what Thomas is saying. This should be a generic thing for reservations, as I've suggested before, choosing the largest page size given the size of the mapping. I would also be good with starting with the When it comes to testing, we should not hard code these kinds of things in the test, but add WhiteBox functions that return the correct numbers given the platform and environment.
So instead of hard coding this, I guess the correct approach would be to return an array of available page sizes and verify that the correct one is used.
I honestly don't even know why we have UseSHM. Seems redundant, and since it uses SystemV shared memory which has a different semantics from mmap, it is subtly broken in a number of places (eg https://bugs.openjdk.java.net/browse/JDK-8257040 or https://bugs.openjdk.java.net/browse/JDK-8257041).
One thing I stumbled upon while looking at this code is why the CodeHeap always wants to have at least 8 pages covering its range:
which means that for a wish pagesize of 1G, the code cache would have to cover at least 8G. I am not even sure this is possible; isn't it limited to 4G? Anyway, they don't uncommit. And the comment in codecache.cpp indicates this is to be able to commit step-wise, but with huge pages the space is committed right from the start anyway. So I do not see what good these 8 pages do. If we allowed the CodeCache to use just one page, it could be 1G in size and use a single 1G page. Note that there are similar min_page_size requests in GC, but I did not look closer into them. Also, this does not take away the usefulness of this proposal.
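The constraint Thomas describes can be sketched as follows (hypothetical helper, not HotSpot's actual page-selection code): given a region size and a minimum page count, only page sizes yielding at least that many pages are usable.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative: the largest page size usable for a region that must
// contain at least min_page_count pages. With an 8-page minimum, a 240M
// code cache can use 2M pages (120 pages) but not 1G pages (0 pages);
// a 1G page would require the code cache to be at least 8G.
static size_t max_page_size_for(size_t region_bytes,
                                size_t min_page_count,
                                const std::vector<size_t>& page_sizes) {
  size_t best = 0;
  for (size_t ps : page_sizes) {
    if (region_bytes / ps >= min_page_count && ps > best) {
      best = ps;
    }
  }
  return best;
}
```

Dropping the minimum to one page is exactly what would let a 1G code cache sit on a single 1G page.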
Thanks @tstuefe. Taking a look now.
Some additional comments. The update to globals.hpp stated in the CSR should also be included in this change:
diff --git a/src/hotspot/share/runtime/globals.hpp b/src/hotspot/share/runtime/globals.hpp
index cfe5fd8116c..2d09075fb48 100644
--- a/src/hotspot/share/runtime/globals.hpp
+++ b/src/hotspot/share/runtime/globals.hpp
@@ -239,7 +239,8 @@ const intx ObjectAlignmentInBytes = 8;
"Use intrinsics for java.util.Base64") \
\
product(size_t, LargePageSizeInBytes, 0, \
- "Large page size (0 to let VM choose the page size)") \
+ "Maximum large page size used (0 will use the default large " \
+ "page size for the environment as the maximum)") \
range(0, max_uintx) \
\
product(size_t, LargePageHeapSizeThreshold, 128*M, \
src/hotspot/os/linux/os_linux.cpp
} else {
  log_info(pagesize)("Large page size (" SIZE_FORMAT "%s) failed sanity check "
                     "checking if smaller large page sizes are usable",
                     byte_size_in_exact_unit(page_size),
                     exact_unit_for_byte_size(page_size));
  for (size_t page_size_ = _page_sizes.next_smaller(page_size);
       page_size_ != (size_t)os::vm_page_size();
       page_size_ = _page_sizes.next_smaller(page_size_)) {
    flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_HUGETLB | hugetlbfs_page_size_flag(page_size_);
    p = mmap(NULL, page_size_, PROT_READ|PROT_WRITE, flags, -1, 0);
    if (p != MAP_FAILED) {
      // Mapping succeeded, sanity check passed.
      munmap(p, page_size_);
      log_info(pagesize)("Large page size (" SIZE_FORMAT "%s) passed sanity check",
                         byte_size_in_exact_unit(page_size_),
                         exact_unit_for_byte_size(page_size_));
      return true;
    }
  }
Yes, something like this. I might propose a follow up to change this slightly, but I first need to think about how we really should handle this.
Signed-off-by: Marcus G K Williams <marcus.williams@intel.com>
Hi @mgkwill, since I have two weeks of vacation coming up: I am fine with the change in its current form, if you take my suggested change (UseLargePages=0 if no default large page size was found). With my addition, the tests at SAP ran through on all our architectures; I could not spot any errors attributable to this patch. Any follow-up issues we have not spotted yet we can fix as follow-ups. If you have no time, that's fine too; I'll be back the second week of June. Cheers, Thomas
@tstuefe Working on an update now. Stay tuned. |
Signed-off-by: Marcus G K Williams <marcus.williams@intel.com>
@tstuefe incorporated your suggestion along with @kstefanj's updates.
Looks good to me now.
Great job, @mgkwill , thanks a lot for sticking it out and having the patience to work for a good solution! I think the resulting code, including all the preparatory patches by Stefan, looks way better now and is really better to maintain.
Cheers, Thomas
@mgkwill This change now passes all automated pre-integration checks. ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details. After integration, the commit message for the final commit will be:
You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed. At the time when this comment was updated there had been 2 new commits pushed to the
Please see this link for an up-to-date comparison between the source branch of this pull request and the target branch. As you do not have Committer status in this project, an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@tstuefe, @walulyai, @kstefanj) but any other Committer may sponsor as well. ➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment.
Looks good. I want to re-iterate what Thomas said, big thanks for having patience and letting it take time.
I will run this through testing over the night, so hopefully we can integrate this tomorrow. If you like you can integrate now and I will issue the sponsor command tomorrow.
@tstuefe @kstefanj Thank you for helping me get up to speed on contributing to OpenJDK as a newbie and for all of the review, suggestions and guidance on getting this patch to where it is today. The patch accomplishes the original goal and is leaps and bounds more readable, better implemented and more maintainable. Thank you both also for the patches merged, the CSR and all the effort to prepare for this change and make it better. I'll comment to integrate today, with the knowledge that sponsorship depends on the testing Stefan is running. Thanks again!
/integrate |
I've done the final testing and it looks good. The testing in our CI doesn't show any new failures, and my manual runs testing different page sizes also look good. We have a few tests that need to be fixed to run with 1g pages, but that is not new with this patch. So I consider this ready for integration.
/sponsor
@kstefanj @mgkwill Since your change was applied there have been 14 commits pushed to the
Your commit was automatically rebased without conflicts. Pushed as commit 94cfeb9. 💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
@kstefanj The command
Change the meaning of LargePageSizeInBytes to be the maximum large page size the JVM may use (not the only one). A default value of zero means the JVM may use large page sizes up to the system's default large page size.
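The new semantics can be sketched like this (illustrative helpers, not the actual argument-processing code):

```cpp
#include <cassert>
#include <cstddef>

// Illustrative: under the new meaning, LargePageSizeInBytes is a maximum,
// not the single page size to use. A value of 0 means "use large page
// sizes up to the system's default large page size".
static size_t effective_max_large_page_size(size_t flag_value,
                                            size_t default_large_page_size) {
  return flag_value == 0 ? default_large_page_size : flag_value;
}

// A discovered page size is admitted only if it does not exceed the max.
static bool large_page_size_allowed(size_t page_size, size_t max_size) {
  return page_size <= max_size;
}
```

For example, with a 2M default large page size and the flag left at 0, a 1G page would not be admitted; setting -XX:LargePageSizeInBytes=1g would admit both 2M and 1G pages.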
Progress
Issue
Reviewers
Contributors
<mgkwill@openjdk.org>
<sjohanss@openjdk.org>
<stuefe@openjdk.org>
Reviewing
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.java.net/jdk pull/1153/head:pull/1153
$ git checkout pull/1153
Update a local copy of the PR:
$ git checkout pull/1153
$ git pull https://git.openjdk.java.net/jdk pull/1153/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 1153
View PR using the GUI difftool:
$ git pr show -t 1153
Using diff file
Download this PR as a diff file:
https://git.openjdk.java.net/jdk/pull/1153.diff